
Conversation

@ARR4N (Collaborator) commented Nov 17, 2025

Introduces an integration point between txpool.TxPool, saexec.Executor and gossip.Set, with a simple transaction-prioritisation mechanism. Any specific rules for when to reject a tx from the mempool will likely require some libevm mods, so they can follow later.

The consensus worst-case validation will still need to track "in-flight" txs, as the LegacyPool only removes them once they are included in a block. An enqueueing subscription mechanism is added for this purpose.
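At a high level, the wiring looks roughly like the sketch below. This is illustrative only, not the PR's code verbatim: it assumes go-ethereum's txpool/core/event APIs and the Transaction wrapper introduced here; the real NewSet additionally seeds the filter from the existing pool contents and handles resets, as discussed in the review below.

	// Illustrative sketch: the gossip set subscribes to the pool's
	// new-transaction feed and mirrors every tx it sees into the bloom
	// filter used to deduplicate gossip.
	func wireGossip(pool *txpool.TxPool, bloom *gossip.BloomFilter) func() error {
		txs := make(chan core.NewTxsEvent)
		sub := pool.SubscribeTransactions(txs, false) // second arg: include reorg-resurfaced txs (assumed geth semantics)

		go func() {
			for {
				select {
				case ev := <-txs:
					for _, tx := range ev.Txs {
						bloom.Add(Transaction{tx}) // Transaction is the PR's gossip wrapper type
					}
				case <-sub.Err():
					return // channel is closed on Unsubscribe
				}
			}
		}()

		// Mirrors NewSet's second return value: a closer to release resources.
		return func() error {
			sub.Unsubscribe()
			return nil
		}
	}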

ARR4N and others added 30 commits September 26, 2025 13:00
@ARR4N ARR4N marked this pull request as ready for review November 19, 2025 12:07
@ARR4N ARR4N self-assigned this Nov 19, 2025
@ARR4N ARR4N added the acp-194 label Nov 19, 2025
ARR4N and others added 3 commits November 19, 2025 17:33
Base automatically changed from arr4n/saexec to main November 19, 2025 20:10
@StephenButtolph (Contributor) left a comment

will continue reviewing tomorrow

// to release resources.
func NewSet(logger logging.Logger, pool *txpool.TxPool, bloom *gossip.BloomFilter, targetBloomElements int) (*Set, func() error) {
	txs := make(chan core.NewTxsEvent)
	sub := pool.SubscribeTransactions(txs, false)
Contributor

It seems like this only receives txs upon mempool reorgs. I would have thought that the bloom filter should update as soon as we add the tx to the pool to avoid unnecessary gossip

Collaborator Author

I've demonstrated in TestTxPoolPopulatesBloomFilter() that it will send new txs as well and that the filter receives them.

Contributor

Sorry, my original comment wasn't questioning whether or not the txs would get added to the bloom filter eventually. My comment was questioning whether or not it is smart to wait until a mempool reorg happens to populate the bloom filter.

In all of our current mempool implementations, we include the transactions into the bloom filter prior to returning from Add.
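For reference, a rough sketch of that pattern. The method name, the Set fields, and the txpool.TxPool.Add signature are all assumptions for illustration; this is not the code under review.

	// Hypothetical: mark the tx in the bloom filter before Add returns,
	// rather than waiting for the pool's event feed to deliver it later.
	func (s *Set) Add(tx *types.Transaction) error {
		// Assumed signature: TxPool.Add(txs []*types.Transaction, sync bool) []error.
		if err := s.pool.Add([]*types.Transaction{tx}, false)[0]; err != nil {
			return err
		}
		s.bloom.Add(Transaction{tx}) // known to gossip immediately, avoiding redundant requests
		return nil
	}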

Comment on lines 90 to 92
for _, tx := range ev.Txs {
	bloom.Add(Transaction{tx})
}
Contributor

This only seems to add the newly issued transactions into the bloom filter, don't we need to reintroduce all the transactions into the bloom filter?

Collaborator Author @ARR4N Nov 21, 2025

Yup; this is now done in NewSet() and tested in TestTxPoolPopulatesBloomFilter().

Contributor

While I think it is good that we initialize the bloom filter in NewSet, I was referring to populating the bloom filter with existing transactions in the mempool after we reset the bloom filter.

ev.Txs doesn't include transactions that were previously in the mempool. So whenever we reset the bloom filter, we are going to start receiving (permanently, until those txs are removed from the mempool) duplicate tx gossip.
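A sketch of the repopulation being suggested. It is illustrative only: the exact Pending signature varies between txpool versions, so the PendingFilter/LazyTransaction types here are assumptions.

	// Illustrative only: after a reset, re-add everything currently tracked
	// by the pool so previously seen txs keep being filtered from gossip.
	func refillBloom(pool *txpool.TxPool, bloom *gossip.BloomFilter, target int) error {
		reset, err := gossip.ResetBloomFilterIfNeeded(bloom, target)
		if err != nil || !reset {
			return err
		}
		for _, txs := range pool.Pending(txpool.PendingFilter{}) { // assumed signature
			for _, lazy := range txs {
				if tx := lazy.Resolve(); tx != nil {
					bloom.Add(Transaction{tx})
				}
			}
		}
		return nil
	}

Note that Pending only exposes executable txs; queued ones would need similar handling, which is part of why this is only a sketch.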

for {
	select {
	case ev := <-txs:
		if _, err := gossip.ResetBloomFilterIfNeeded(bloom, targetBloomElements); err != nil {
Contributor

It is probably more robust to use a multiple of the current mempool size as a lower bound here. Otherwise we could get into a situation where we end up resetting every time we see a new tx

Collaborator Author

Ahh, nice idea. I had no idea how to set this (hence making it an argument). I've arbitrarily gone with 2x pool size but could always reintroduce an argument to configure that.
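Roughly, and assuming txpool.TxPool.Stats() is available here (in upstream geth it reports pending and queued counts), the bound could look like:

	// Illustrative: lower-bound the reset threshold by the current mempool
	// size so the filter isn't reset every time a new tx arrives.
	func bloomResetTarget(pool *txpool.TxPool, minElements int) int {
		pending, queued := pool.Stats()
		return max(minElements, 2*(pending+queued)) // 2x pool size, as chosen above
	}

That value would then stand in for targetBloomElements in the ResetBloomFilterIfNeeded call.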

Comment on lines +86 to +96
select {
case b := <-p.in:
	select {
	case p.out <- core.ChainHeadEvent{Block: b}:

	case <-p.quit:
		return
	}
case <-p.quit:
	return
}
Contributor

I feel like this makes the "receive" followed by "send" logic a bit more clear

Suggested change
select {
case b := <-p.in:
	select {
	case p.out <- core.ChainHeadEvent{Block: b}:
	case <-p.quit:
		return
	}
case <-p.quit:
	return
}

var b *types.Block
select {
case b = <-p.in:
case <-p.quit:
	return
}

select {
case p.out <- core.ChainHeadEvent{Block: b}:
case <-p.quit:
	return
}

Collaborator Author

I have a strong dislike of the "variable look-back" pattern as it causes a non-linearity in reading the code.

I'm being hyperbolic here, but this is the cognitive pattern I'm trying to avoid:

  1. Push b to my brain stack without knowing how it will be used (context aids memory).
  2. Set some b, making sure to notice that it's = and not :=, recalling my earlier stack push.
  3. Parse some unrelated "distraction" code, keeping b at the back of my mind.
  4. Use b.

vs

  1. Declare and set b, pushing it to my brain stack but with context as to where it came from.
  2. Use b.

Granted in this particular solution it's not all that significant because the "distraction" code is minimal, but I still feel like it's an anti-pattern. All that said, I'm open to having my mind changed if you think there are sufficient reasons to change.

Contributor

fwiw, using := rather than = won't compile.

Would you be okay with some helper functions instead? Something like:

func (p *pipe) loop() {
	defer close(p.done)
	for {
		b, ok := p.read()
		if !ok {
			return
		}

		if !p.write(b) {
			return
		}
	}
}

func (p *pipe) read() (*types.Block, bool) {
	select {
	case b := <-p.in:
		return b, true
	case <-p.quit:
		return nil, false
	}
}

func (p *pipe) write(b *types.Block) bool {
	select {
	case p.out <- core.ChainHeadEvent{Block: b}:
		return true
	case <-p.quit:
		return false
	}
}

It's much more verbose... But it doesn't have your (perhaps limited to you :wink:) meat stack issue

Comment on lines +52 to +54
func (bc *blockchain) StateAt(root common.Hash) (*state.StateDB, error) {
	return state.New(root, bc.exec.StateCache(), nil)
}
Contributor

So, with this PR the mempool's validity checking is based on the settled state right?

Collaborator Author

Intent is currently undefined. Sorry, I should have been clearer in what I meant in the PR description (i.e. I'm kicking that can down the road entirely):

Any specific rules for when to reject a tx from the mempool will likely require some libevm mods, so they can follow later.

The p2p architecture and gossip was all so new to me that I found this PR relatively difficult, so wanted to focus on learning it and getting the wiring correct. I'll concentrate on things like validity when we have the VM to orchestrate all the moving parts.
